
    Deep Neural Network with l2-norm Unit for Brain Lesions Detection

    Automated brain lesion detection is an important and very challenging clinical diagnostic task because lesions vary in size, shape, contrast, and location. Deep learning has recently shown promising progress in many application fields, which motivates us to apply this technology to such an important problem. In this paper, we propose a novel, end-to-end trainable approach for brain lesion classification and detection using a deep Convolutional Neural Network (CNN). To investigate its applicability, we applied our approach to several brain diseases, including high- and low-grade glioma tumors, ischemic stroke, and Alzheimer's disease, using brain Magnetic Resonance Images (MRI) as input for the analysis. We propose a new operating unit that receives features from several projections of a subset of units in the layer below and computes a normalized l2-norm for the next layer. We evaluated the proposed approach on two different CNN architectures and a number of popular benchmark datasets. The experimental results demonstrate the superior ability of the proposed approach. Comment: Accepted for presentation in ICONIP-201
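    The l2-norm unit described above can be sketched numerically. This is a minimal illustration under assumed details, not the authors' implementation: the projection matrices, dimensions, and the exact normalization are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_norm_unit(x, weight_matrices, eps=1e-8):
    # Stack the K projections of the input features, take the feature-wise
    # l2-norm across projections, then normalize the resulting vector
    # before passing it to the next layer.
    projections = np.stack([W @ x for W in weight_matrices])  # (K, m)
    norms = np.linalg.norm(projections, axis=0)               # (m,)
    return norms / (np.linalg.norm(norms) + eps)

x = rng.normal(size=8)                                 # input features
weights = [rng.normal(size=(4, 8)) for _ in range(3)]  # K=3 projections
out = l2_norm_unit(x, weights)
```

    The output is non-negative and has (approximately) unit l2-norm, which is the normalization property the unit name suggests.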

    HeMIS: Hetero-Modal Image Segmentation

    We introduce a deep learning image segmentation framework that is extremely robust to missing imaging modalities. Instead of attempting to impute or synthesize missing data, the proposed approach learns, for each modality, an embedding of the input image into a single latent vector space for which arithmetic operations (such as taking the mean) are well defined. Points in that space, which are averaged over modalities available at inference time, can then be further processed to yield the desired segmentation. As such, any combinatorial subset of available modalities can be provided as input, without having to learn a combinatorial number of imputation models. Evaluated on two neurological MRI datasets (brain tumors and MS lesions), the approach yields state-of-the-art segmentation results when provided with all modalities; moreover, its performance degrades remarkably gracefully when modalities are removed, significantly more so than alternative mean-filling or other synthesis approaches.Comment: Accepted as an oral presentation at MICCAI 201
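    The fusion step HeMIS relies on can be sketched in a few lines; the embedding dimension and modality names below are illustrative stand-ins, not the paper's trained per-modality networks.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for per-modality embeddings into a shared latent space; in
# HeMIS each would come from that modality's encoder network.
embeddings = {
    "T1":    rng.normal(size=(16,)),
    "T2":    rng.normal(size=(16,)),
    "FLAIR": rng.normal(size=(16,)),
}

def fuse(available):
    # Arithmetic in the latent space is well defined, so the mean over
    # whichever modalities are present works for any non-empty subset.
    return np.mean([embeddings[m] for m in available], axis=0)

full = fuse(["T1", "T2", "FLAIR"])
partial = fuse(["T1", "FLAIR"])   # a modality is missing: no imputation model needed
```

    This is why no combinatorial number of imputation models is required: every subset of modalities maps to the same latent space through the same mean.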

    Learning Data Augmentation for Brain Tumor Segmentation with Coarse-to-Fine Generative Adversarial Networks

    There is a common belief that the successful training of deep neural networks requires many annotated training samples, which are often expensive and difficult to obtain, especially in the biomedical imaging field. While it is easy for researchers to use data augmentation to expand the size of training sets, constructing augmented data that teaches the network the desired invariance and robustness properties with traditional augmentation techniques is challenging in practice. In this paper, we propose a novel automatic data augmentation method that uses generative adversarial networks to learn augmentations that enable machine-learning-based methods to learn from the available annotated samples more efficiently. The architecture consists of a coarse-to-fine generator that captures the manifold of the training sets and generates generic augmented data. In our experiments, we show the efficacy of our approach on Magnetic Resonance Imaging (MRI) images, achieving an improvement of 3.5% in Dice coefficient on the BRATS15 Challenge dataset compared to traditional augmentation approaches. Our proposed method also boosts a common segmentation network to state-of-the-art performance on the BRATS15 Challenge.
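    The Dice coefficient the abstract reports improvements on is a standard overlap metric, independent of the paper's code; it can be computed as follows:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    # Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|)
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])   # toy ground-truth mask
print(round(dice(a, b), 3))            # → 0.667
```

    A 3.5% improvement in this metric corresponds to recovering noticeably more of the overlap between predicted and reference tumor masks.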

    Automated claustrum segmentation in human brain MRI using deep learning

    In the last two decades, neuroscience has produced intriguing evidence for a central role of the claustrum in mammalian forebrain structure and function. However, relatively few in vivo studies of the claustrum exist in humans. A reason for this may be the delicate, sheet-like structure of the claustrum, lying between the insular cortex and the putamen, which makes it poorly amenable to conventional segmentation methods. Recently, Deep Learning (DL) based approaches have been successfully introduced for automated segmentation of complex subcortical brain structures. In the following, we present a multi-view DL-based approach to segment the claustrum in T1-weighted MRI scans. We trained and evaluated the proposed method on 181 individuals, using bilateral manual claustrum annotations by an expert neuroradiologist as the reference standard. Cross-validation experiments yielded a median volumetric similarity, robust Hausdorff distance, and Dice score of 93.3%, 1.41 mm, and 71.8%, respectively, representing equal or superior segmentation performance compared to human intra-rater reliability. The leave-one-scanner-out evaluation showed good transferability of the algorithm to images from unseen scanners, with slightly inferior performance. Furthermore, we found that DL-based claustrum segmentation benefits from multi-view information and requires a sample size of around 75 MRI scans in the training set. We conclude that the developed algorithm allows for robust automated claustrum segmentation and thus holds considerable potential for facilitating MRI-based research of the human claustrum. The software and models of our method are made publicly available.
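    The multi-view idea can be sketched as a simple fusion of per-view probability maps. The averaging rule and the 0.5 threshold below are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for probability maps produced by three 2D networks operating
# on axial, coronal, and sagittal slices of the same volume.
axial, coronal, sagittal = rng.random((3, 4, 4))

fused = (axial + coronal + sagittal) / 3.0   # average over the views
mask = fused > 0.5                           # final binary segmentation
```

    Averaging view-wise probabilities is one common way such multi-view pipelines exploit complementary in-plane resolution from each orientation.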

    Physiology-based simulation of the retinal vasculature enables annotation-free segmentation of OCT angiographs

    Optical coherence tomography angiography (OCTA) can non-invasively image the eye's circulatory system. In order to reliably characterize the retinal vasculature, there is a need to automatically extract quantitative metrics from these images. The calculation of such biomarkers requires a precise semantic segmentation of the blood vessels. However, deep-learning-based methods for segmentation mostly rely on supervised training with voxel-level annotations, which are costly to obtain. In this work, we present a pipeline to synthesize large amounts of realistic OCTA images with intrinsically matching ground-truth labels, thereby obviating the need for manual annotation of training data. Our proposed method is based on two novel components: 1) a physiology-based simulation that models the various retinal vascular plexuses and 2) a suite of physics-based image augmentations that emulate the OCTA image acquisition process, including typical artifacts. In extensive benchmarking experiments, we demonstrate the utility of our synthetic data by successfully training retinal vessel segmentation algorithms. Encouraged by our method's competitive quantitative and superior qualitative performance, we believe that it constitutes a versatile tool to advance the quantitative analysis of OCTA images.
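    A toy version of the annotation-free pairing idea might look like this. The specific artifact models below (multiplicative speckle, a row-wise stripe modulation) are illustrative assumptions, not the paper's simulator:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic vessel map: since we generated it, the label is known exactly.
vessels = (rng.random((32, 32)) > 0.9).astype(float)

# Physics-inspired augmentations emulating acquisition: speckle noise and
# a row-wise intensity modulation standing in for motion/stripe artifacts.
speckle = rng.gamma(shape=4.0, scale=0.25, size=vessels.shape)
stripes = 1.0 + 0.2 * np.sin(np.arange(32) / 3.0)[:, None]
image = np.clip(vessels * speckle * stripes + 0.05 * rng.random((32, 32)), 0.0, 1.0)

# (image, vessels) is a training pair obtained without any manual annotation.
```

    The key property is that the label comes for free: the segmentation target existed before the "acquisition" corrupted the image.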

    Identifying chromophore fingerprints of brain tumor tissue on hyperspectral imaging using principal component analysis

    Hyperspectral imaging (HSI) is an optical technique that processes the electromagnetic spectrum at a multitude of monochromatic, adjacent frequency bands. The wide-bandwidth spectral signature of a target object's reflectance allows fingerprinting its physical, biochemical, and physiological properties. HSI has been applied in various domains, such as remote sensing and biological tissue analysis. Recently, HSI was also used to differentiate between healthy and pathological tissue under operative conditions in a surgery room on patients diagnosed with brain tumors. In this article, we perform a statistical analysis of the brain tumor patients' HSI scans from the HELICoiD dataset with the aim of identifying the correlation between reflectance spectra and the absorption spectra of tissue chromophores. Using principal component analysis (PCA), we determine the most relevant spectral features for intra- and inter-tissue class differentiation. Furthermore, we demonstrate that such spectral features are correlated with the spectra of cytochrome, i.e., the chromophore highly involved in (hyper)metabolic processes. Identifying such fingerprints of chromophores in reflectance spectra is a key step for automated molecular profiling and, eventually, expert-free biomarker discovery.
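    The PCA step can be illustrated on synthetic spectra. The latent "chromophore" curve and noise levels below are invented for the sketch, not HELICoiD data; the point is that the top principal component's loading recovers the dominant spectral signature:

```python
import numpy as np

rng = np.random.default_rng(3)
bands, pixels = 50, 200
chromophore = np.sin(np.linspace(0.0, np.pi, bands))   # latent spectrum
# Each pixel's reflectance: a random amount of the latent spectrum plus noise.
X = rng.normal(size=(pixels, 1)) * chromophore + 0.1 * rng.normal(size=(pixels, bands))

Xc = X - X.mean(axis=0)                 # center per band
cov = Xc.T @ Xc / (pixels - 1)          # band-by-band covariance
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                    # loading of the top component

# |correlation| between the PC1 loading and the latent chromophore spectrum
corr = abs(np.corrcoef(pc1, chromophore)[0, 1])
```

    Comparing such loadings against known chromophore absorption spectra is, in spirit, the correlation analysis the abstract describes.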

    Image-based modeling of tumor growth in patients with glioma.

    No abstract available.

    Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction

    Deep learning for regression tasks on medical imaging data has shown promising results. However, compared to other approaches, its performance is strongly linked to dataset size. In this study, we evaluate 3D convolutional neural networks (CNNs) and classical regression methods with hand-crafted features for survival time regression of patients with high-grade brain tumors. The tested CNNs for regression showed promising but unstable results. The best-performing deep learning approach reached an accuracy of 51.5% on held-out samples of the training set. All tested deep learning experiments were outperformed by a Support Vector Classifier (SVC) using 30 radiomic features. The investigated features included intensity, shape, location, and deep features. The method submitted to the BraTS 2018 survival prediction challenge is an ensemble of SVCs, which reached a cross-validated accuracy of 72.2% on the BraTS 2018 training set, 57.1% on the validation set, and 42.9% on the testing set. The results suggest that more training data is necessary for stable performance of a CNN model for direct regression from magnetic resonance images, and that non-imaging clinical patient information is crucial along with imaging information. Comment: Contribution to The International Multimodal Brain Tumor Segmentation (BraTS) Challenge 2018, survival prediction task
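    The ensemble idea can be illustrated with a simple majority vote over per-model class predictions. This is a generic sketch; the authors' SVC ensemble and its radiomic features are not reproduced here:

```python
import numpy as np

def majority_vote(predictions):
    # predictions: (n_models, n_samples) integer class labels in {0, 1, 2}
    votes = np.asarray(predictions)
    return np.apply_along_axis(
        lambda col: np.bincount(col, minlength=3).argmax(), 0, votes)

# Three hypothetical classifiers, four patients
# (classes: 0 = short, 1 = mid, 2 = long survival, as in BraTS 2018).
preds = [[0, 1, 2, 2],
         [0, 1, 1, 2],
         [1, 1, 2, 2]]
print(majority_vote(preds).tolist())   # → [0, 1, 2, 2]
```

    Ensembling smooths over the instability that individual models (especially small-data CNNs) exhibit, which matches the abstract's observation.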

    On astrophysical solution to ultra high energy cosmic rays

    We argue that an astrophysical solution to the UHECR problem is viable. The spectral features of extragalactic protons interacting with the CMB are calculated in a model-independent way. Using the power-law generation spectrum $\propto E^{-\gamma_g}$ as the only assumption, we analyze four features of the proton spectrum: the GZK cutoff, the dip, the bump, and the second dip. We find the dip, induced by electron-positron production on the CMB, to be the most robust feature, existing in the energy range $1\times 10^{18} - 4\times 10^{19}$ eV. Its shape is stable relative to the various phenomena included in the calculations. The dip is well confirmed by observations of the AGASA, HiRes, Fly's Eye, and Yakutsk detectors. The best fit is reached at $\gamma_g = 2.7$, with the allowed range 2.55 - 2.75. The dip is used for energy calibration of the detectors. After the energy calibration, the fluxes and spectra of all three detectors agree perfectly, with the discrepancy between AGASA and HiRes at $E > 1\times 10^{20}$ eV being not statistically significant. The agreement of the dip with observations should be considered as confirmation of UHE proton interaction with the CMB. The dip has two flattenings. The high-energy flattening at $E \approx 1\times 10^{19}$ eV automatically explains the ankle. The low-energy flattening at $E \approx 1\times 10^{18}$ eV provides the transition to galactic cosmic rays. This transition is studied quantitatively. The UHECR sources, AGN and GRBs, are studied in a model-dependent way, and acceleration is discussed. Based on the agreement of the dip with existing data, we make a robust prediction for the spectrum at $1\times 10^{18} - 1\times 10^{20}$ eV, to be measured in the near future by the Auger detector. Comment: Revised version as published in Phys. Rev. D74 (2006) 043005 with a small addition
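    The only model assumption in the abstract, the power-law generation spectrum, is straightforward to evaluate numerically across the quoted dip range (the five-point grid below is arbitrary):

```python
import numpy as np

# Generation spectrum Q(E) ∝ E^{-gamma_g} at the best-fit gamma_g = 2.7,
# evaluated across the pair-production dip range 1e18 - 4e19 eV.
gamma_g = 2.7
E = np.logspace(18.0, np.log10(4e19), 5)   # eV
Q = E ** (-gamma_g)

# Decades of suppression across the dip range for the pure power law:
# log10(Q(1e18)/Q(4e19)) = gamma_g * log10(40) ≈ 4.33
span = np.log10(Q[0] / Q[-1])
```

    The observed dip is the departure of the measured spectrum from this smooth power law, caused by electron-positron production on the CMB.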